
    Standardized or simple effect size: what should be reported?

    It is regarded as best practice for psychologists to report effect size when disseminating quantitative research findings. Reporting of effect size in the psychological literature is patchy – though this may be changing – and when reported it is far from clear that appropriate effect size statistics are employed. This paper considers the practice of reporting point estimates of standardized effect size and explores factors such as reliability, range restriction and differences in design that distort standardized effect size unless suitable corrections are employed. For most purposes simple (unstandardized) effect size is more robust and versatile than standardized effect size. Guidelines for deciding what effect size metric to use and how to report it are outlined. Foremost among these are: i) a preference for simple effect size over standardized effect size, and ii) the use of confidence intervals to indicate a plausible range of values the effect might take. Deciding on the appropriate effect size statistic to report always requires careful thought and should be influenced by the goals of the researcher, the context of the research and the potential needs of readers.
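    To make the distinction concrete, below is a minimal Python sketch – not drawn from the paper, and using hypothetical data – that reports a simple effect size (the raw mean difference, with a confidence interval) alongside a standardized effect size (Cohen's d).

```python
# Minimal sketch: simple vs standardized effect size for two independent groups.
# The data below are hypothetical and for illustration only.
import numpy as np
from scipy import stats

treatment = np.array([14.1, 15.3, 13.8, 16.0, 15.2, 14.7, 15.9, 14.4])
control = np.array([12.9, 13.5, 12.2, 14.0, 13.1, 12.7, 13.8, 13.3])

# Simple (unstandardized) effect size: the difference in means, in scale units.
diff = treatment.mean() - control.mean()

# Standardized effect size: Cohen's d, the same difference divided by the
# pooled standard deviation.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = diff / pooled_sd

# 95% confidence interval for the simple effect (independent-samples t).
se = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

print(f"Simple effect (mean difference): {diff:.2f} units, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
print(f"Standardized effect (Cohen's d): {cohens_d:.2f}")
```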

    Understanding statistical power in the context of applied research

    Estimates of statistical power are widely used in applied research for purposes such as sample size calculations. This paper reviews the benefits of power and sample size estimation and considers several problems with the use of power calculations in applied research that result from misunderstandings or misapplications of statistical power. These problems include the use of retrospective power calculations and standardized measures of effect size. Methods of increasing the power of proposed research that do not involve merely increasing sample size (such as reduction in measurement error, increasing ‘dose’ of the independent variable and optimizing the design) are noted. It is concluded that applied researchers should consider a broader range of factors (other than sample size) that influence statistical power, and that the use of standardized measures of effect size should be avoided (except as intermediate stages in prospective power or sample size calculations).
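    As a hedged illustration of a prospective power calculation – not taken from the paper, with all numbers hypothetical – the Python sketch below uses statsmodels to estimate the sample size for a two-group comparison, treating the standardized effect size only as an intermediate quantity, and then shows how reducing measurement error or increasing the ‘dose’ raises power without adding participants.

```python
# Prospective power and sample size sketch for a two-group t-test.
# All quantities (raw difference, SD, alpha, target power) are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# The standardized effect size enters only as an intermediate step:
# an expected raw difference of 2.0 units on a scale with SD 5.0 gives d = 0.4.
raw_diff, sd = 2.0, 5.0
d = raw_diff / sd

# Sample size per group for 80% power at alpha = .05.
n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
print(f"n per group for d = {d:.2f}: {n_per_group:.0f}")

# Reducing measurement error (SD 5.0 -> 4.0) or increasing the 'dose'
# (raw difference 2.0 -> 3.0) raises power with the same sample size.
for raw, s in [(2.0, 4.0), (3.0, 5.0)]:
    power = analysis.solve_power(effect_size=raw / s, nobs1=n_per_group, alpha=0.05)
    print(f"raw difference {raw}, SD {s}: power ≈ {power:.2f}")
```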

    The case for new academic workspace

    Executive summary: This report draws upon the combined efforts of a number of estates professionals, architects, academics, designers, and senior managers involved in the planning of new university buildings for the 21st century. Across these perspectives, all would agree – although perhaps for different reasons – that this planning is difficult and that a number of particular considerations apply in the design of academic workspaces. Despite these difficulties, they would also agree that when this planning goes well, ‘good’ buildings are truly transformational – for both the university as a whole and the people who work and study in them. The value of well-designed buildings goes far beyond their material costs, and endures long after those costs have been forgotten …

    Improving psychological science: further thoughts, reflections and ways forward

    Cogent Psychology is a pioneering and dynamic Open Access journal for the psychology community, publishing original research, reviews, and replications that span the full spectrum of psychological inquiry. In 2021, it relaunched with a new Editor-in-Chief and Section Editors with an exciting vision to combine open access publishing with open research practices. As such, the journal welcomes traditional and new article formats, including Registered Reports, Brief Replication Reports, Review Articles, and Brief Reports. This broader range of formats is designed to reflect the evolving nature of psychological research and open science approaches. To the best of our knowledge, no other psychology journal offers such a distinctive combination of article publishing formats. Moreover, we welcome submissions in nine key areas of psychological science: Clinical Psychology, Cognitive & Experimental Psychology, Developmental Psychology, Educational Psychology, Health Psychology, Neuropsychology, Personality & Individual Differences, Social Psychology and Work, Industrial & Organisational Psychology.

    Justify your alpha

    Benjamin et al. proposed changing the conventional “statistical significance” threshold (i.e., the alpha level) from p ≤ .05 to p ≤ .005 for all novel claims with relatively low prior odds. They provided two arguments for why lowering the significance threshold would “immediately improve the reproducibility of scientific research.” First, a p-value near .05 provides weak evidence for the alternative hypothesis. Second, under certain assumptions, an alpha of .05 leads to high false positive report probabilities (FPRP; the probability that a significant finding is a false positive).
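    The false positive report probability argument can be illustrated with a short calculation. The Python sketch below is not part of the comment itself; it uses the standard formula FPRP = α(1 − π) / (α(1 − π) + (1 − β)π), where π is the prior probability that the tested effect is real and 1 − β is the power, with illustrative prior odds and power.

```python
# Illustrative FPRP calculation; prior odds and power are assumptions, not
# values taken from the abstract above.
def fprp(alpha, power, prior_prob):
    """False positive report probability: P(effect is absent | p < alpha)."""
    false_positives = alpha * (1 - prior_prob)
    true_positives = power * prior_prob
    return false_positives / (false_positives + true_positives)

# With prior odds of 1:10 that the effect is real (prior_prob = 1/11) and 80% power:
for alpha in (0.05, 0.005):
    print(f"alpha = {alpha}: FPRP ≈ {fprp(alpha, power=0.80, prior_prob=1 / 11):.2f}")
# Under these assumptions, lowering alpha from .05 to .005 cuts the FPRP
# from roughly .38 to roughly .06.
```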

    Justify your alpha

    In response to recommendations to redefine statistical significance to p ≤ .005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level.